Disinformation campaign


AI-Powered Disinformation Swarms Are Coming for Democracy

WIRED

Advances in artificial intelligence are creating a perfect storm for those seeking to spread disinformation at unprecedented speed and scale, and the resulting content is virtually impossible to detect. In 2016, hundreds of Russians filed into a modern office building on 55 Savushkina Street in St. Petersburg every day; they were part of the now-infamous troll farm known as the Internet Research Agency. Day and night, seven days a week, these employees would manually comment on news articles, post on Facebook and Twitter, and generally seek to rile up Americans about the then-upcoming presidential election. When the scheme was finally uncovered, there was widespread media coverage and Senate hearings, and social media platforms changed the way they verified users.


How Russian-funded fake news network aims to disrupt election in Europe - BBC investigation

BBC News

A secret Russian-funded network is attempting to disrupt upcoming democratic elections in an eastern European state, the BBC has found. Using an undercover reporter, we discovered the network promised to pay participants if they posted pro-Russian propaganda and fake news undermining Moldova's pro-EU ruling party ahead of the country's 28 September parliamentary ballot. Participants were also paid to find supporters of Moldova's pro-Russia opposition to secretly record, and to carry out a so-called poll. This was done in the name of a non-existent organisation, making it illegal. The results of this selective sampling, an organiser from the network suggested, could lay the groundwork for questioning the outcome of the election.


The Download: the US office that tracks foreign disinformation is being eliminated, and explaining vibe coding

MIT Technology Review

The only office within the US State Department that monitors foreign disinformation is to be eliminated, according to US Secretary of State Marco Rubio, confirming reporting by MIT Technology Review. The Counter Foreign Information Manipulation and Interference (R/FIMI) Hub is a small office in the State Department's Office of Public Diplomacy that tracks and counters foreign disinformation campaigns. The culling of the office leaves the State Department without a way to actively counter the increasingly sophisticated disinformation campaigns from foreign governments like those of Russia, Iran, and China.

What is vibe coding, exactly? When OpenAI cofounder Andrej Karpathy excitedly took to X back in February to post about his new hobby, he probably had no idea he was about to coin a phrase that encapsulated an entire movement steadily gaining momentum across the world.


Network-informed Prompt Engineering against Organized Astroturf Campaigns under Extreme Class Imbalance

Kanakaris, Nikos, Ping, Heng, Xiao, Xiongye, Ahmed, Nesreen K., Luceri, Luca, Ferrara, Emilio, Bogdan, Paul

arXiv.org Artificial Intelligence

Detecting organized political campaigns is of paramount importance in fighting disinformation on social media. Existing approaches to identifying such organized actions mostly employ techniques from network science, graph machine learning, and natural language processing. Their ultimate goal is to analyze the relationships and interactions (e.g., re-posting) among users and the textual similarities of their posts. Despite their effectiveness in recognizing astroturf campaigns, these methods face significant challenges, notably the class imbalance in available training datasets. To mitigate this issue, recent methods usually resort to data augmentation or to increasing the number of positive samples, which may not always be feasible or sufficient in real-world settings. Following a different path, in this paper we propose a novel framework for identifying astroturf campaigns based solely on large language models (LLMs), introducing a Balanced Retrieval-Augmented Generation (Balanced RAG) component. Our approach feeds both the textual content of the posts (in our case, tweets) and the users' social-network interactions to a language model. Then, through prompt engineering and the proposed Balanced RAG method, it effectively detects coordinated disinformation campaigns on X (Twitter). The proposed framework requires no training or fine-tuning of the language model. Instead, by strategically harnessing the strengths of prompt engineering and Balanced RAG, it enables LLMs to overcome the effects of class imbalance and effectively identify coordinated political campaigns. Experimental results demonstrate that, by incorporating the proposed prompt engineering and Balanced RAG methods, our framework outperforms traditional graph-based baselines, achieving 2x-3x improvements in precision, recall, and F1 score.
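The abstract describes the pipeline only at a high level; the sketch below illustrates the core idea under stated assumptions. The Example records, the similarity function, and llm_complete are hypothetical stand-ins, not the authors' code. The balanced retrieval step, which draws equal numbers of coordinated and organic exemplars into the prompt, is what counters the class imbalance.

```python
# Minimal sketch of the Balanced RAG idea, not the authors' implementation.
# Helper names (Example, similarity, llm_complete) are hypothetical.
from dataclasses import dataclass

@dataclass
class Example:
    text: str          # post text (a tweet, in the paper's setting)
    interactions: str  # serialized user-interaction context (e.g., re-post edges)
    label: str         # "coordinated" or "organic"

def similarity(a: str, b: str) -> float:
    """Toy lexical overlap; a real system would use embedding similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def balanced_retrieve(query: str, index: list[Example], k: int = 6) -> list[Example]:
    """Retrieve k/2 coordinated and k/2 organic exemplars, so the prompt is
    class-balanced even when the underlying corpus is overwhelmingly organic."""
    def top(pool: list[Example]) -> list[Example]:
        return sorted(pool, key=lambda e: similarity(query, e.text), reverse=True)[: k // 2]
    coordinated = [e for e in index if e.label == "coordinated"]
    organic = [e for e in index if e.label == "organic"]
    return top(coordinated) + top(organic)

def llm_complete(prompt: str) -> str:
    """Placeholder for any chat-completion client; no fine-tuning is needed."""
    raise NotImplementedError("plug in an LLM client here")

def classify(tweet: str, interactions: str, index: list[Example]) -> str:
    """Build a class-balanced few-shot prompt and ask the LLM for a label."""
    shots = "\n\n".join(
        f"Post: {e.text}\nInteractions: {e.interactions}\nLabel: {e.label}"
        for e in balanced_retrieve(tweet, index)
    )
    prompt = (
        "You detect coordinated astroturf campaigns on X (Twitter).\n\n"
        f"{shots}\n\n"
        f"Post: {tweet}\nInteractions: {interactions}\nLabel:"
    )
    return llm_complete(prompt).strip()
```

Because the exemplars in the prompt are balanced by construction, the model never sees the skewed class prior of the corpus, which is the intuition behind the reported robustness to extreme imbalance.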


Lessons for Editors of AI Incidents from the AI Incident Database

Paeth, Kevin, Atherton, Daniel, Pittaras, Nikiforos, Frase, Heather, McGregor, Sean

arXiv.org Artificial Intelligence

As artificial intelligence (AI) systems become increasingly deployed across the world, they are also increasingly implicated in AI incidents: harm events to individuals and society. As a result, industry, civil society, and governments worldwide are developing best practices and regulations for monitoring and analyzing AI incidents. The AI Incident Database (AIID) is a project that catalogs AI incidents and supports further research by providing a platform to classify incidents for different operational and research-oriented goals. This study reviews the AIID's dataset of 750+ AI incidents, along with two independent taxonomies applied to these incidents, to identify common challenges in indexing and analyzing AI incidents. We find that certain patterns of AI incidents present structural ambiguities that challenge incident databasing, and we show that epistemic uncertainty in AI incident reporting is unavoidable. We therefore report mitigations to make incident processes more robust to uncertainty about cause, extent of harm, severity, or technical details of implicated systems. With these findings, we discuss how to develop future AI incident reporting practices.
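As one illustration of the kind of mitigation the authors describe, an incident record can carry explicit confidence qualifiers on each field where reports disagree. The sketch below is hypothetical and is not the AIID's actual schema.

```python
# Illustrative sketch only; this is NOT the AI Incident Database's schema.
# It shows one way a record could carry explicit uncertainty about cause,
# extent of harm, severity, and the implicated system.
from dataclasses import dataclass, field
from enum import Enum

class Confidence(Enum):
    CONFIRMED = "confirmed"  # corroborated by multiple independent reports
    REPORTED = "reported"    # asserted by a single source
    DISPUTED = "disputed"    # sources conflict

@dataclass
class UncertainField:
    value: str
    confidence: Confidence
    sources: list[str] = field(default_factory=list)

@dataclass
class IncidentRecord:
    incident_id: int
    title: str
    cause: UncertainField
    harm_extent: UncertainField
    severity: UncertainField
    implicated_system: UncertainField

# A hypothetical record where harm and the responsible system are contested:
incident = IncidentRecord(
    incident_id=1,
    title="Example incident",
    cause=UncertainField("model hallucination", Confidence.REPORTED),
    harm_extent=UncertainField("unknown number of users", Confidence.DISPUTED),
    severity=UncertainField("moderate", Confidence.REPORTED),
    implicated_system=UncertainField("unnamed chatbot", Confidence.DISPUTED),
)
```

Storing the qualifier alongside the value, rather than forcing a single ground truth, keeps the database honest about exactly the ambiguities the paper identifies.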


DOJ: Russia Aimed Propaganda at Gamers, Minorities to Swing 2024 Election

WIRED

In late August 2023, Ilya Gambashidze was in a conference room at the office of Social Design Agency, a Russian IT company he founded that is based in Moscow, close to the world-renowned Moscow Conservatory. Gambashidze was relatively unknown in Russian politics at the time, but just a month earlier his name had appeared on a Council of the European Union list of Russian nationals sanctioned for playing a central role in a sprawling disinformation campaign against Ukraine. In the conference room, Gambashidze was laying out his plans for a new target: along with his colleagues, he began drafting what would become known as the "Good Old USA Project." The project was meant to influence the outcome of the US presidential election in favor of former president Donald Trump, specifically targeting certain minorities, swing state residents, and online gamers, among others, in a scheme that included a full-time team dedicated to the cause. On Wednesday, Gambashidze and his company were named by the Department of Justice among the architects of a disinformation campaign known as Doppelganger, which has targeted Ukraine for the last two years and, more recently, the US elections.


Narratives at Conflict: Computational Analysis of News Framing in Multilingual Disinformation Campaigns

Sinelnik, Antonina, Hovy, Dirk

arXiv.org Artificial Intelligence

Any report frames issues to favor a particular interpretation by highlighting or excluding certain aspects of a story. Despite the widespread use of framing in disinformation, framing properties and detection methods remain underexplored outside the English-speaking world. We explore how multilingual framing of the same issue differs systematically. We use eight years of Russia-backed disinformation campaigns, spanning 8k news articles in 4 languages targeting 15 countries. We find that disinformation campaigns consistently and intentionally favor specific framings, depending on the target language of the audience. We further find that Russian-language articles consistently highlight selected frames depending on the region of media coverage. Finally, we find that the two most prominent models for automatic frame analysis underperform and show high disagreement, highlighting the need for further research.
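The reported disagreement between frame-analysis models can be made concrete with a standard agreement statistic. The sketch below computes Cohen's kappa over per-article frame labels; the labels and the two-model setup are invented for illustration and do not reproduce the paper's data, models, or exact metric.

```python
# Sketch: quantifying disagreement between two automatic frame classifiers
# with Cohen's kappa. All labels below are invented for illustration.
from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Chance-corrected agreement between two annotators/models."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n) for c in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Hypothetical frame labels assigned by two models to the same six articles:
model_1 = ["security", "economy", "morality", "security", "economy", "security"]
model_2 = ["economy", "economy", "security", "security", "morality", "economy"]
print(f"Cohen's kappa: {cohens_kappa(model_1, model_2):.2f}")
# A kappa near (or below) zero indicates agreement no better than chance,
# i.e., high disagreement of the kind the paper reports.
```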


African Democracy in the Era of Generative Disinformation: Challenges and Countermeasures against AI-Generated Propaganda

Okolo, Chinasa T.

arXiv.org Artificial Intelligence

In light of prominent discourse around the negative implications of generative AI, an emerging area of research is investigating the current and estimated impacts of AI-generated propaganda on African citizens participating in elections. Throughout Africa, there have already been suspected cases of AI-generated propaganda influencing electoral outcomes or precipitating coups in countries like Nigeria, Burkina Faso, and Gabon, underscoring the need for comprehensive research in this domain. This paper aims to highlight the risks associated with the spread of generative AI-driven disinformation within Africa while concurrently examining the roles of government, civil society, academia, and the general public in the responsible development, practical use, and robust governance of AI. To understand how African governments might effectively counteract the impact of AI-generated propaganda, this paper presents case studies illustrating the current usage of generative AI for election-related propaganda in Africa. Subsequently, this paper discusses efforts by fact-checking organisations to mitigate the negative impacts of disinformation, explores the potential for new initiatives to actively engage citizens in literacy efforts to combat disinformation spread, and advocates for increased governmental regulatory measures. Overall, this research seeks to increase comprehension of the potential ramifications of AI-generated propaganda on democratic processes within Africa and propose actionable strategies for stakeholders to address these multifaceted challenges.


OpenAI says Russian and Israeli groups used its tools to spread disinformation

The Guardian

OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran. Malicious actors used the company's generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report. As generative AI has become a booming industry, there has been widespread concern among researchers and lawmakers over its potential for increasing the quantity and quality of online disinformation. Artificial intelligence companies such as OpenAI, which makes ChatGPT, have tried with mixed results to assuage these concerns and place guardrails on their technology.


Why China Is So Bad at Disinformation

WIRED

"China will use AI to disrupt elections in the US, South Korea and India, Microsoft warns" one read. "China Is Using AI to Sow Disinformation and Stoke Discord Across Asia and the US," another claimed. The headlines were based on a report published earlier this month by Microsoft's Threat Analysis Center which outlined how a Chinese disinformation campaign was now utilizing artificial technology to inflame divisions and disrupt elections in the US and around the world. The campaign, which has already targeted Taiwan's elections, uses AI-generated audio and memes designed to grab user attention and boost engagement. But what these headlines and Microsoft itself failed to adequately convey is that the Chinese government-linked disinformation campaign, known as Spamouflage Dragon or Dragonbridge, has so far been virtually ineffective.